
    Production of a commercial spot for Radio Triquency

    Development and production of a commercial spot for Radio Triquency, described from the producer's perspective, with an explanation of the respective tasks of the Producer, the Produzent, and the production manager (Produktionsleiter)

    Novel algorithms and hardware architectures for Montgomery Multiplication over GF(p)

    This report describes the design and FPGA implementation results of a scalable hardware architecture for computing modular multiplication in prime fields GF(p), based on the Montgomery multiplication (MM) algorithm. Starting from an existing digit-serial version of the MM algorithm, a novel digit-digit based MM algorithm is derived, and two hardware architectures that compute it are described. In the proposed approach, the input operands (multiplicand, multiplier and modulus) are represented using the radix β = 2^k. Operands of arbitrary size can be multiplied with modular reduction using almost the same hardware, since the multiplier's kernel module that performs the modular multiplication depends only on k. The novel hardware architectures proposed in this paper were verified by modeling them in VHDL and implementing them in the Xilinx Spartan and Virtex5 FPGAs. Design trade-offs are analyzed considering different operand sizes commonly used in cryptography and different values of k. The proposed MM designs are well suited to implementation in modern FPGAs, making use of the available dedicated multiplier and memory blocks and drastically reducing the use of the FPGA's standard logic while keeping an acceptable performance compared with other implementation approaches. In the Virtex5 implementation, the proposed MM multiplier reaches a throughput of 242 Mbps using only 219 FPGA slices, achieving a 1024-bit modular multiplication in 4.21 μs
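
    The digit-serial Montgomery multiplication that the hardware designs start from can be sketched in a few lines of software. The following Python fragment is an illustrative sketch only, not the paper's digit-digit variant or its VHDL: it consumes one radix-β = 2^k digit of the multiplier per iteration, and the function name and the small test modulus are assumptions made for the example.

        def mont_mul(a, b, N, k, n):
            # Digit-serial Montgomery multiplication with radix beta = 2^k.
            # Returns a * b * R^{-1} mod N, where R = 2^(k*n) and N is odd.
            beta = 1 << k
            n_prime = (-pow(N, -1, beta)) % beta      # -N^{-1} mod beta (precomputable per modulus)
            S = 0
            for i in range(n):
                a_i = (a >> (k * i)) & (beta - 1)     # i-th radix-2^k digit of a
                q = ((S + a_i * b) * n_prime) % beta  # makes S + a_i*b + q*N divisible by beta
                S = (S + a_i * b + q * N) >> k        # exact division by beta
            return S - N if S >= N else S

        # Quick check with a small, hypothetical odd modulus: map the operands into the
        # Montgomery domain, multiply, then map back by multiplying with 1.
        N, k = 0xF1, 4
        n = (N.bit_length() + k - 1) // k             # number of radix-2^k digits
        R = 1 << (k * n)
        a, b = 0x5A, 0x33
        abar, bbar = (a * R) % N, (b * R) % N
        assert mont_mul(mont_mul(abar, bbar, N, k, n), 1, N, k, n) == (a * b) % N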

    An FPGA-based programmable processor for bilinear pairings

    Bilinear pairings on elliptic curves are an active research field in cryptography. The first cryptographic protocols based on bilinear pairings were proposed around the year 2000, and they are promising solutions to security concerns in different domains, such as pervasive computing and cloud computing. The computation of bilinear pairings, which relies on arithmetic over finite fields, is the most time-consuming operation in pairing-based cryptosystems. That has motivated research on efficient hardware architectures that improve the performance of security protocols. In the literature, several works have focused on the design of custom hardware architectures for pairings; however, flexible designs provide advantages because there are several types of pairings and algorithms to compute them. This work presents the design and implementation of a novel programmable cryptoprocessor for computing bilinear pairings over binary fields in FPGAs, which is able to support different pairing algorithms and parameters such as the elliptic curve, the tower field and the distortion map. The results show that the proposed cryptoprocessor achieves high flexibility at competitive timing and area usage when compared to custom designs for pairings defined over singular/supersingular elliptic curves at a 128-bit security level
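
    As the abstract notes, arithmetic over finite fields dominates the cost of pairing computation. A minimal Python sketch of polynomial-basis multiplication in GF(2^m), the kind of binary-field operation such a cryptoprocessor accelerates, is given below; the toy field and reduction polynomial are chosen for illustration only and are not the parameters used in the paper.

        def gf2m_mul(a, b, m, r):
            # Polynomial-basis multiplication in GF(2^m).
            # a, b: field elements as integers (bit i = coefficient of x^i).
            # r:    reduction polynomial f(x) = x^m + r(x), given without the x^m term.
            prod = 0
            while b:                                  # carry-less (XOR) schoolbook product
                if b & 1:
                    prod ^= a
                a <<= 1
                b >>= 1
            for i in range(2 * m - 2, m - 1, -1):     # replace x^i by x^(i-m) * r(x) until deg < m
                if prod & (1 << i):
                    prod ^= (1 << i) ^ (r << (i - m))
            return prod

        # Toy check in GF(2^3) with f(x) = x^3 + x + 1: (x + 1)(x^2 + 1) = x^2.
        assert gf2m_mul(0b011, 0b101, 3, 0b011) == 0b100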

    A data integrity verification service for cloud storage based on building blocks

    Cloud storage is a popular solution for organizations and users to store data in a ubiquitous and cost-effective manner. However, violations of confidentiality and integrity are still issues associated with this technology. In this context, there is a need for tools that enable organizations and users to verify the integrity of their information stored in cloud services. In this paper, we present the design and implementation of an efficient service based on the provable data possession cryptographic model, which enables organizations to verify data integrity on demand without retrieving files from the cloud. The storage and cryptographic components have been developed in the form of building blocks, which are deployed on the user side using the Manager/Worker pattern, which favors exploiting parallelism when executing data possession challenges. An experimental evaluation in a private cloud revealed the efficacy of launching integrity verification challenges against cloud storage services and the feasibility of applying a containerized task-parallel scheme that significantly improves the performance of the data possession proof service in real-world scenarios in comparison with an implementation of the original provable data possession scheme. This work has been partially funded by GRANT Fondo Sectorial Mexican Space Agency-CONACYT Num. 262891 and by the EU under the COST programme Action IC1305, Network for Sustainable Ultrascale Computing (NESUS)
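
    The verify-without-downloading interaction that the service relies on can be illustrated with a deliberately simplified challenge-response sketch. The Python fragment below is not the homomorphic-tag construction of the original provable data possession scheme; it only shows the owner/cloud exchange, and the block size, sample size and function names are illustrative assumptions.

        import hashlib
        import secrets

        BLOCK_SIZE = 4096      # illustrative block size
        SAMPLE_SIZE = 8        # blocks sampled per challenge

        def file_blocks(path):
            # Yield fixed-size blocks of a file.
            with open(path, "rb") as f:
                while block := f.read(BLOCK_SIZE):
                    yield block

        def precompute_challenges(path, rounds=16):
            # Owner side, before upload: build disposable (challenge, expected answer) pairs.
            # Each challenge names a few random block indices plus a fresh nonce; the expected
            # answer is the digest an honest server must reproduce from the stored file.
            blocks = list(file_blocks(path))
            rng = secrets.SystemRandom()
            challenges = []
            for _ in range(rounds):
                indices = sorted(rng.sample(range(len(blocks)), min(SAMPLE_SIZE, len(blocks))))
                nonce = secrets.token_bytes(16)
                h = hashlib.sha256(nonce)
                for i in indices:
                    h.update(blocks[i])
                challenges.append(((indices, nonce), h.digest()))
            return challenges

        def prove(path, indices, nonce):
            # Cloud side: recompute the digest over the challenged blocks of the stored copy.
            blocks = list(file_blocks(path))
            h = hashlib.sha256(nonce)
            for i in indices:
                h.update(blocks[i])
            return h.digest()

        # Audit: the owner spends one precomputed challenge and compares, with no file download.
        # (challenge, expected), *rest = precompute_challenges("report.pdf")
        # ok = prove("/cloud/copy/report.pdf", *challenge) == expected

    Unlike this sketch, the provable data possession model uses homomorphic verifiable tags, so the owner does not have to precompute and store a finite supply of challenges.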

    Floristic study of the central part of the Nenetzingo ravine, municipality of Ixtapan de la Sal, Estado de México

    There is still no complete inventory of the plants that exist in Mexican territory; some groups remain unstudied and some areas of the country remain unexplored. Such is the case of the Nenetzingo ravine, Ixtapan de la Sal, Estado de México. With the aim of contributing to the knowledge of its flora, a checklist of species was compiled that includes aspects such as life form, reproductive phenology and abundance. The work consisted of collecting, drying and pressing all the species of the study area over 21 months for their subsequent identification, from which the checklist was prepared. As results, 362 species distributed in 89 families and 248 genera are reported, including 17 species reported for the first time for the state. The most important families at the genus level are Asteraceae, Poaceae and Fabaceae, with 12.9, 10.6 and 5.2% of the genera each, and likewise 14.1, 9.4 and 6.4% of the species, respectively. Most of the plants are herbs and shrubs, representing 82.6% of the identified species. The flowering and fruiting periods of the flora are concentrated in summer and autumn, accounting for 67.6% and 69.4%, respectively. A large proportion of the plants are abundant (47.8%), followed by frequent (29.3%) and scarce (22.9%). Finally, the study area shows considerable diversity, reflected in the number of species; however, the analysis of the flora indicates that it is a disturbed site that requires conservation actions

    On the efficient delivery and storage of IoT data in edge-fog-cloud environments

    This article belongs to the Special Issue Internet of Things, Sensing and Cloud Computing. Cloud storage has become a keystone for organizations to manage large volumes of data produced by sensors at the edge, as well as information produced by deep and machine learning applications. Nevertheless, the latency produced by geographically distributed systems deployed on any of the edge, the fog, or the cloud leads to delays that are observed by end users in the form of high response times. In this paper, we present an efficient scheme for the management and storage of Internet of Things (IoT) data in edge-fog-cloud environments. In our proposal, entities called data containers are coupled, in a logical manner, with nano/microservices deployed on any of the edge, the fog, or the cloud. The data containers implement a hierarchical cache file system, including storage levels such as in-memory, file system, and cloud services, for transparently managing the input/output data operations produced by nano/microservices (e.g., a sensor hub collecting data from sensors at the edge or machine learning applications processing data at the edge). Data containers are interconnected through a secure and efficient content delivery network, which transparently and automatically performs the continuous delivery of data through the edge-fog-cloud. A prototype of our proposed scheme was implemented and evaluated in a case study based on the management of electrocardiogram sensor data. The obtained results reveal the suitability and efficiency of the proposed scheme. This research was funded by project 41756 "Plataforma tecnológica para la gestión, aseguramiento, intercambio y preservación de grandes volúmenes de datos en salud y construcción de un repositorio nacional de servicios de análisis de datos de salud" by PRONACES-CONACYT
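
    One way to picture the hierarchical cache inside a data container is as a tiered read/write path: in-memory first, the local file system second, and a cloud object store last. The Python sketch below is a rough illustration under that reading of the abstract; the class name and the cloud_get/cloud_put callables are hypothetical, not the paper's API.

        import os

        class HierarchicalStore:
            # Tiered store: memory (level 1) -> local files (level 2) -> cloud (level 3).
            def __init__(self, cache_dir, cloud_get, cloud_put, mem_limit=128):
                self.mem = {}                # level 1: in-memory cache
                self.cache_dir = cache_dir   # level 2: local file-system cache
                self.cloud_get = cloud_get   # level 3: cloud object storage (callables)
                self.cloud_put = cloud_put
                self.mem_limit = mem_limit
                os.makedirs(cache_dir, exist_ok=True)

            def _path(self, key):
                return os.path.join(self.cache_dir, key)

            def get(self, key):
                if key in self.mem:                      # hit in memory
                    return self.mem[key]
                path = self._path(key)
                if os.path.exists(path):                 # hit on local disk
                    with open(path, "rb") as f:
                        data = f.read()
                else:                                    # miss: fetch from the cloud
                    data = self.cloud_get(key)
                    with open(path, "wb") as f:          # populate the disk tier
                        f.write(data)
                if len(self.mem) < self.mem_limit:       # populate the memory tier
                    self.mem[key] = data
                return data

            def put(self, key, data):
                # Write-through: update every tier so readers at any level see fresh data.
                self.mem[key] = data
                with open(self._path(key), "wb") as f:
                    f.write(data)
                self.cloud_put(key, data)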

    Comparing creatinine clearance in 24 hour urine and Cockcroft-Gault formula to determine glomerular filtration rate in pregnant women attending a Lima hospital

    The glomerular filtration rate (GFR) is calculated from the endogenous creatinine clearance (DCE) in 24-hour urine, a method with limitations in specimen collection and difficulties for patients; several formulas have been proposed to estimate renal function instead. Objective. To apply the Cockcroft-Gault formula for glomerular filtration and compare it with the chemical colorimetric method in pregnant women. Design. Observational, correlational, prospective and cross-sectional study. Setting. Central Laboratory, Sergio Bernales National Hospital, Lima, Peru. Participants. Pregnant women. Methods. After obtaining informed consent, blood and 24-hour urine samples from 92 pregnant women were processed between November 2015 and January 2016. Pearson's correlation coefficient was used to compare the DCE results obtained with the Cockcroft-Gault formula and with serum and 24-hour urine. Results. The sample had a normal distribution according to the Kolmogorov-Smirnov test. The average DCE in 24-hour urine was 73.65 ± 19.85 mL/min, and that obtained by the Cockcroft-Gault formula was 99.82 ± 18.75 mL/min, a significant difference by the t test for related samples (p < 0.001); the correlation between these laboratory methods was low (r = 0.561) both in all pregnant women and by trimester, and Lin's concordance correlation coefficient (ccc) showed a lack of agreement (p < 0.01). Sensitivity (S) was 0.50, specificity (Sp) 0.591, positive predictive value (PPV) 0.212 and negative predictive value (NPV) 0.881. Conclusions. The DCE obtained by the Cockcroft-Gault formula showed low correlation with the serum and 24-hour urine DCE in pregnant women (r = [0.4 to 0.67]), with low levels of S, Sp, PPV and NPV, so its use is not recommended in pregnant women
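
    The Cockcroft-Gault estimate that the study compares against 24-hour urine clearance is the standard published formula; a small Python helper makes it concrete (the example values below are illustrative and are not taken from the study data).

        def cockcroft_gault(age_years, weight_kg, serum_creatinine_mg_dl, female=True):
            # Estimated creatinine clearance (mL/min) by the Cockcroft-Gault formula:
            # CrCl = (140 - age) * weight / (72 * serum creatinine), times 0.85 for women.
            crcl = (140 - age_years) * weight_kg / (72 * serum_creatinine_mg_dl)
            return 0.85 * crcl if female else crcl

        # Illustrative values only: a 28-year-old woman weighing 65 kg with a serum
        # creatinine of 0.6 mg/dL gives roughly 143 mL/min.
        print(round(cockcroft_gault(28, 65, 0.6), 1))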

    A policy-based containerized filter for secure information sharing in organizational environments

    In organizational environments, sensitive information is unintentionally exposed and sent to the cloud without encryption by insiders, even when they have previously been informed about cloud risks. To mitigate the effects of this information privacy paradox, we propose the design, development and implementation of SecFilter, a security filter that enables organizations to implement security policies for information sharing. SecFilter automatically performs the following tasks: (a) intercepts files before sending them to the cloud; (b) searches for sensitive criteria in the context and content of the intercepted files by using mining techniques; (c) calculates the risk level for each identified criterion; (d) assigns a security level to each file based on the risk detected in its content and context; and (e) encrypts each file by using a multi-level security engine based on digital envelopes built from symmetric encryption, attribute-based encryption and digital signatures, which guarantees the security services of confidentiality, integrity and authentication on each file while access control mechanisms are enforced before the secured file versions are sent to cloud storage. A prototype of SecFilter was implemented for a real-world file sharing application deployed on a private cloud. Fine-tuning of the SecFilter components is described, and a case study was conducted based on document sharing from a well-known repository (the MedLine corpus). The experimental evaluation revealed the feasibility and efficiency of applying a security filter to share information in organizational environments. This work has been partially supported by the Spanish "Ministerio de Economia y Competitividad" under project grant TIN2016-79637-P "Towards Unification of HPC and Big Data paradigms"
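
    Steps (b) to (d), searching for sensitive criteria, scoring the risk and mapping it onto a security level, can be pictured with a simple policy table. The Python fragment below is only a rough illustration of that pipeline; the patterns, weights, thresholds and level names are made-up examples, not SecFilter's mining criteria or actual policies.

        import re

        # Illustrative policy: each rule maps a pattern for sensitive content to a risk weight.
        POLICY = [
            (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), 5),                  # SSN-like identifier
            (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), 2),            # e-mail address
            (re.compile(r"\b(diagnosis|patient|medical)\b", re.I), 3),  # clinical vocabulary
        ]

        LEVELS = [(0, "public"), (3, "internal"), (8, "confidential")]  # risk threshold -> level

        def risk_score(text):
            # Steps (b)-(c): scan the intercepted file's content and sum per-criterion risk.
            return sum(weight * len(pattern.findall(text)) for pattern, weight in POLICY)

        def security_level(text):
            # Step (d): map the accumulated risk onto a discrete security level.
            score, level = risk_score(text), "public"
            for threshold, name in LEVELS:
                if score >= threshold:
                    level = name
            return level

        # A file rated "confidential" would then be wrapped in a digital envelope (symmetric
        # encryption for the payload, attribute-based encryption for the key, plus a digital
        # signature) before the client is allowed to push it to cloud storage.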